Combining Pairwise Classifiers with Stacking
Authors
Abstract
Pairwise classification is a technique that deals with multi-class problems by converting them into a series of binary problems, one for each pair of classes. The predictions of the binary classifiers are typically combined into an overall prediction by voting, i.e., by predicting the class that receives the largest number of votes. In this paper we try to generalize the voting procedure by replacing it with a trainable classifier: we propose the use of a meta-level classifier that is trained to arbitrate among the conflicting predictions of the binary classifiers. In our experiments, this yielded substantial gains on a few datasets but no gain on others. These performance differences do not appear to depend on quantitative parameters of the datasets, such as the number of classes.
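To make the setup concrete, the sketch below illustrates the idea in Python with scikit-learn; it is an illustrative stand-in, not the authors' implementation. One binary classifier is trained per pair of classes, and the resulting pairwise predictions are combined either by simple voting or by a meta-level classifier trained to arbitrate among them. The iris data, decision-tree base learners, and logistic-regression meta-learner are assumptions chosen for illustration only.

```python
# A minimal sketch, not the paper's implementation: pairwise (one-vs-one)
# decomposition combined by (a) voting and (b) a trained meta-level classifier.
# scikit-learn, the iris data, decision-tree base learners, and the
# logistic-regression meta-learner are illustrative assumptions.
from itertools import combinations

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classes = np.unique(y_tr)

# One binary classifier per pair of classes, trained only on examples of that pair.
pairwise = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y_tr, [a, b])
    pairwise[(a, b)] = DecisionTreeClassifier(random_state=0).fit(X_tr[mask], y_tr[mask])

def pairwise_predictions(X):
    # One column of predicted class labels per binary problem.
    return np.column_stack([clf.predict(X) for clf in pairwise.values()])

def vote(X):
    # Baseline combination: predict the class that wins the most pairwise duels.
    preds = pairwise_predictions(X)
    return np.array([np.bincount(row, minlength=classes.max() + 1).argmax() for row in preds])

# Stacking: a meta-level classifier learns to arbitrate among the pairwise
# predictions. For brevity the meta-features are built on the training set itself;
# building them via cross-validation would be the more careful choice.
meta = LogisticRegression(max_iter=1000).fit(pairwise_predictions(X_tr), y_tr)

print("voting accuracy:  ", (vote(X_te) == y_te).mean())
print("stacking accuracy:", (meta.predict(pairwise_predictions(X_te)) == y_te).mean())
```

The meta-features here are simply the predicted class labels of the binary classifiers; richer encodings (e.g., one-hot indicators or class probabilities) are possible variations of the same scheme.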
Similar Sources
Combining Multiple Pairwise Neural Networks Classifiers: A Comparative Study
Classifier combination constitutes an interesting approach when solving multiclass classification problems. We review standard methods used to decode the decomposition generated by a one-against-one approach. New decoding methods are proposed and are compared to standard methods. A stacking decoding is also proposed and consists in replacing the whole decoding by a trainable classifier to arbit...
Combined Binary Classifiers with Applications to Speech Recognition
Many applications require classification of examples into one of several classes. A common way of designing such classifiers is to determine the class based on the outputs of several binary classifiers. We consider some of the most popular methods for combining the decisions of the binary classifiers, and improve existing bounds on the error rates of the combined classifier over the training se...
Is Combining Classifiers Better than Selecting the Best One?
We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. We then propose a new method for stacking that uses multi-response model trees at the meta-level, and show that it clearly outperforms existing stacki...
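As an illustration of that comparison, here is a brief sketch only, not that paper's setup: scikit-learn has no multi-response model trees, so a logistic-regression meta-learner stands in at the meta-level, and the dataset and base learners are arbitrary choices.

```python
# Illustrative comparison: stacking a heterogeneous ensemble versus simply
# selecting the best single classifier by cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
base = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
]

# "Select the best": the single base classifier with the highest cross-validated accuracy.
best_name, best_score = max(
    ((name, cross_val_score(est, X, y, cv=5).mean()) for name, est in base),
    key=lambda t: t[1],
)

# Stacking: all base classifiers feed a trained meta-level model.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000), cv=5)
stack_score = cross_val_score(stack, X, y, cv=5).mean()

print(f"best single classifier ({best_name}): {best_score:.3f}")
print(f"stacking:                       {stack_score:.3f}")
```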
A comparison of stacking with meta decision trees to other combining methods
Meta decision trees (MDTs) are a method for combining multiple classifiers. We present an integration of the algorithm MLC4.5 for learning MDTs into the Weka data mining suite. We compare classifier ensembles combined with MDTs to bagged and boosted decision trees, and to classifier ensembles combined with other methods: voting, grading, multi-scheme and stacking with multi-response linear regr...
Publication date: 2003